13 research outputs found

    Selective Sharing is Caring: Toward the Design of a Collaborative Tool to Facilitate Team Sharing

    Temporary teams are commonly limited by their members' lack of experience with new teammates, leading to poor understanding and coordination. Collaborative tools can promote team mental models of teammates (e.g., teammate attitudes, tendencies, and preferences) by sharing personal information among teammates during team formation. The current study utilizes 89 participants engaging in real-world temporary teams to better understand user perceptions of sharing personal information. Qualitative and quantitative results revealed unique findings, including: 1) users perceived personality and conflict management style assessments to be accurate and sharing these assessments to be helpful, but had mixed perceptions regarding the appropriateness of sharing; 2) users of the collaborative tool had higher perceptions of sharing in terms of helpfulness and appropriateness; and 3) user feedback highlighted the need for tools to selectively share less data with more context, to improve appropriateness and helpfulness while reducing reading time.

    Understanding Human-AI Cooperation Through Game-Theory and Reinforcement Learning Models

    For years, researchers have demonstrated the viability and applicability of game theory principles to the field of artificial intelligence. Game theory has also been shown to be a useful tool for researching human-machine interaction, and cooperation in particular, by creating an environment where cooperation can initially form before reaching a continuous and stable presence in a human-machine system. Additionally, recent developments in reinforcement learning have led to artificial agents that cooperate more efficiently with humans, especially in complex environments. This research conducts an empirical study to understand how different modern reinforcement learning algorithms and game theory scenarios create different cooperation levels in human-machine teams. Three reinforcement learning algorithms (Vanilla Policy Gradient, Proximal Policy Optimization, and Deep Q-Network) and two game theory scenarios (Hawk-Dove and Prisoner's Dilemma) were examined in a large-scale experiment. The results indicated that different reinforcement learning models interact differently with humans, with the Deep Q-Network engendering higher cooperation levels. The Hawk-Dove scenario elicited significantly higher levels of cooperation in the human-artificial intelligence system. A multiple regression using these two independent variables also showed a significant ability to predict cooperation in human-artificial intelligence systems. The results highlight the importance of social and task framing in human-artificial intelligence systems and the importance of carefully choosing reinforcement learning models.
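
    As a point of reference, the two scenarios are defined by their payoff structures. A minimal sketch in Python follows; the specific payoff values are illustrative placeholders, not the ones used in the study.

        # Illustrative payoff matrices (values assumed, not the study's).
        # Each entry maps (row_action, col_action) to (row_payoff, col_payoff).

        # Prisoner's Dilemma: mutual cooperation beats mutual defection,
        # but each player is individually tempted to defect (T > R > P > S).
        prisoners_dilemma = {
            ("cooperate", "cooperate"): (3, 3),  # R: reward for mutual cooperation
            ("cooperate", "defect"):    (0, 5),  # S, T: sucker's payoff vs. temptation
            ("defect",    "cooperate"): (5, 0),
            ("defect",    "defect"):    (1, 1),  # P: punishment for mutual defection
        }

        # Hawk-Dove: mutual escalation is the worst outcome, so yielding to an
        # aggressor is rational (here resource V = 4, fight cost C = 6).
        hawk_dove = {
            ("hawk", "hawk"): (-1, -1),  # (V - C) / 2: costly fight
            ("hawk", "dove"): (4, 0),    # hawk takes the whole resource
            ("dove", "hawk"): (0, 4),
            ("dove", "dove"): (2, 2),    # V / 2: resource shared peacefully
        }

        def payoff(game, a1, a2):
            """Return the payoff pair for one round of the given game."""
            return game[(a1, a2)]

        print(payoff(prisoners_dilemma, "defect", "cooperate"))  # (5, 0)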

    Understanding the Role of Trust in Human-Autonomy Teaming

    This study aims to better understand trust in human-autonomy teams, finding that trust is related to team performance. A Wizard of Oz methodology was used in an experiment to simulate an autonomous agent as a team member in a remotely piloted aircraft system environment. The study focused on the team performance and team social behaviors (particularly trust) of human-autonomy teams. Results indicate 1) that low performing teams showed lower levels of trust in the autonomous agent than both medium and high performing teams, 2) that trust in the autonomous agent declined over time across low, medium, and high performing teams, and 3) that in addition to indicating low levels of trust in the autonomous agent, members of low and medium performing teams also indicated lower levels of trust in their human team members.

    The Effect of AI Teammate Ethicality on Trust Outcomes and Individual Performance in Human-AI Teams

    This study improves the understanding of trust in human-AI teams by investigating the relationship between AI teammate ethicality and individual outcomes of trust (i.e., monitoring, confidence, fear) in AI and human teammates over time. Specifically, a synthetic task environment was built to support a three-person team with two human teammates and one AI teammate (simulated by a confederate). The AI teammate performed either an ethical or unethical action across three missions, and measures of trust in the human and AI teammates were taken after each mission. Results revealed that unethical actions by the AI teammate had a significant effect on nearly all of the measured outcomes of trust, and that levels of trust were dynamic over time for both the AI and human teammates, with the AI teammate recovering trust to Mission 1 levels by Mission 3. AI ethicality was mostly unrelated to participants' trust in their fellow human teammate, though it did decrease perceptions of fear, paranoia, and skepticism toward them. In addition, trust in the human and AI teammates was not significantly related to individual performance outcomes. Both findings diverge from previous trust research in human-AI teams utilizing competency-based trust violations.

    Clemson University’s Teacher Learning Progression Program: Personalized Advanced Credentials for Teachers

    This chapter provides an overview of Clemson University's Teacher Learning Progression program, which offers participating middle school science, technology, engineering, and/or mathematics (STEM) teachers personalized advanced credentials. In contrast to typical professional development (PD) approaches, this program identifies individualized pathways for PD based on teachers' unique interests and needs and offers PD options through the use of a “recommender system,” which provides context-specific recommendations to guide teachers toward the identification of preferred PD pathways and content. In this chapter, the authors introduce the program and highlight (1) the data collection and instrumentation needed to make personalized PD recommendations, (2) the recommender system, and (3) the personalized advanced credential options. The authors also discuss lessons learned through the initial stages of project implementation and consider future directions for the use of recommender systems to support teacher PD, in both research and applied settings.
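
    As a rough illustration of the recommender-system idea (not the program's actual implementation), here is a minimal content-based sketch in Python; the teacher-profile fields, topic tags, and PD options are hypothetical.

        # Hypothetical content-based recommender sketch: the program's actual
        # recommender, data, and PD options are not described at this level.

        def recommend_pd(teacher_interests, pd_options, top_n=3):
            """Rank PD options by overlap between a teacher's interest tags and
            each option's topic tags, a stand-in for context-specific matching."""
            def score(option):
                return len(teacher_interests & option["topics"])
            ranked = sorted(pd_options, key=score, reverse=True)
            return [opt["name"] for opt in ranked[:top_n] if score(opt) > 0]

        pd_options = [
            {"name": "Project-based math workshop", "topics": {"mathematics", "project-based"}},
            {"name": "Robotics micro-credential",   "topics": {"engineering", "technology"}},
            {"name": "Data literacy short course",  "topics": {"mathematics", "technology"}},
        ]

        print(recommend_pd({"mathematics", "technology"}, pd_options))
        # ['Data literacy short course', 'Project-based math workshop', 'Robotics micro-credential']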

    Investigating Team Coordination in Baseball Using a Novel Joint Decision Making Paradigm

    A novel joint decision making paradigm for assessing team coordination was developed and tested using baseball infielders. Balls launched onto an infield at different trajectories were filmed using four video cameras, each placed at one of the typical positions of the four infielders. Each participant viewed temporally occluded videos from one of the four positions and was asked to say either “ball,” if they would attempt to field it, or the name of the bag that they would cover. The evaluations of two experienced coaches were used to assign a group coordination score for each trajectory, and group decision times were calculated. Thirty groups of four current college baseball players were assigned to one of three conditions: (i) teammates (players from the same team viewing from their own positions), (ii) non-teammates (players from different teams viewing from their own positions), or (iii) scrambled teammates (players from the same team viewing from positions other than their own). Teammates performed significantly better (i.e., faster and more coordinated decisions) than the other two groups, whereas scrambled teammates performed significantly better than non-teammates. These findings suggest that team coordination is achieved through both experience with one's teammates' responses to particular events (e.g., a ball hit up the middle) and knowledge of one's own general action capabilities (e.g., running speed). The sensitivity of our joint decision making paradigm to group makeup supports its use as a method for studying team coordination.
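
    To make the joint decision structure concrete, here is a minimal sketch of how conflicting calls for a single trajectory could be tallied; the automated rule below is hypothetical, since the study used expert coach evaluations to score coordination.

        # Hypothetical conflict tally for one trajectory's joint decision
        # (the study itself scored coordination via expert coach ratings).

        def response_conflicts(responses):
            """Count conflicts in a group's responses: more than one fielder
            calling "ball", or two fielders covering the same bag."""
            ball_calls = [pos for pos, call in responses.items() if call == "ball"]
            bag_calls = [call for call in responses.values() if call != "ball"]
            conflicts = max(0, len(ball_calls) - 1)            # multiple fielding attempts
            conflicts += len(bag_calls) - len(set(bag_calls))  # duplicated bag coverage
            return conflicts

        group = {"1B": "ball", "2B": "second", "SS": "second", "3B": "third"}
        print(response_conflicts(group))  # 1: two infielders covering second base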

    Task, Usability, and Error Analyses of Ambulance-based Telemedicine for Stroke Care

    Past research has established that telemedicine improves stroke care through decreased time to treatment and more accurate diagnoses. The goals of this study were to 1) study how clinicians complete stroke assessment using a telemedicine system integrated into ambulances, 2) determine potential errors and usability issues when using the system, and 3) develop recommendations to mitigate these issues. This study investigated the use of a telemedicine platform to evaluate a stroke patient in an ambulance with a geographically distributed caregiving team composed of a paramedic, a nurse, and a neurologist. The tasks involved were first determined based on 13 observations of a simulated stroke involving 39 care providers. Based on these observational studies, a Hierarchical Task Analysis (HTA) was developed; subsequently, a heuristic evaluation was conducted to identify usability issues in the interface of the telemedicine system. This was followed by a Systematic Human Error Reduction and Prediction Approach (SHERPA) to determine the potential for human error while providing care using the telemedicine work system. The HTA yielded 6 primary subgoals categorizing the 97 tasks required to complete the stroke evaluation. The heuristic evaluation found 123 unique heuristic violations, with an average severity of 2.38. One hundred thirty-one potential human errors were identified with SHERPA, the two most common being miscommunication and selecting an incorrect option. Several recommendations are proposed, including improved labeling, consistent formatting, rigid or suggested formatting for data input, automation of task structure and camera movement, and audio/visual improvements to support communication.

    Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature

    Objective: We define human–autonomy teaming and offer a synthesis of the existing empirical research on the topic. Specifically, we identify the research environments, dependent variables, themes representing the key findings, and critical future research directions. Background: Whereas a burgeoning literature on high-performance teamwork identifies the factors critical to success, much less is known about how human–autonomy teams (HATs) achieve success. Human–autonomy teamwork involves humans working interdependently toward a common goal along with autonomous agents. Autonomous agents involve a degree of self-government and self-directed behavior (agency); they take on a unique role or set of tasks and work interdependently with human team members to achieve a shared objective. Method: We searched the literature on human–autonomy teaming. To meet our criteria for inclusion, a paper needed to involve empirical research and meet our definition of human–autonomy teaming. We found 76 articles that met our criteria. Results: We report on research environments and find that the key independent variables involve autonomous agent characteristics, team composition, task characteristics, human individual differences, training, and communication. We identify themes for each of these and discuss future research needs. Conclusion: There are areas where research findings are clear and consistent, but there are many opportunities for future research. Particularly important will be research that identifies mechanisms linking team input to team output variables.

    The Impact of Training on Human–Autonomy Team Communications and Trust Calibration

    Objective: This work examines two human–autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. Background: Human–autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. Method: Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. Results: Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time; however, they did not perform better than the control condition. Conclusions: Training based on entrainment of communications, wherein the introduction of timely information exchange by one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. Applications: Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
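
    For context, a communication anticipation ratio of the kind referenced in the results is often computed as the number of pushed (volunteered) information transfers divided by pulled (requested) transfers. A minimal sketch in Python, assuming a simple event-log format not taken from the study:

        # Minimal anticipation-ratio sketch: pushes (unprompted information
        # transfers) over pulls (requested transfers). The event-log format
        # here is an assumption for illustration.

        def anticipation_ratio(events):
            """Ratio of information pushes to pulls; values above 1 indicate a
            team that volunteers information before it is requested."""
            pushes = events.count("push")
            pulls = events.count("pull")
            return pushes / pulls if pulls else float("inf")

        mission_log = ["push", "push", "pull", "push", "pull"]
        print(anticipation_ratio(mission_log))  # 1.5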